forked from postgres/postgres
Sync Fork from Upstream Repo #159
Merged
We have outNode() coverage for all path nodes, but this one was missed when it was added.
The use of this function is limited to superusers and the code includes a hardcoded check for that. However, the code would look up the PGPROC entry to signal for the memory dump before checking whether the user is a superuser, which makes no sense when we already know that an error will be returned. Note that the old code would also let one know whether a process was a PostgreSQL process even for non-authorized users; that is no longer the case, and it avoids taking ProcArrayLock when doing so would most likely turn out to be unnecessary. Thanks to Julien Rouhaud and Tom Lane for the discussion. Discussion: https://postgr.es/m/YLxw1uVGIAP5uMPl@paquier.xyz
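For illustration, a minimal sketch of the reordered check, assuming PostgreSQL's backend headers and the PROCSIG_LOG_MEMORY_CONTEXT signal; this is not the literal committed code:

```c
#include "postgres.h"
#include "miscadmin.h"
#include "storage/proc.h"
#include "storage/procarray.h"
#include "storage/procsignal.h"
#include "utils/builtins.h"

Datum
pg_log_backend_memory_contexts(PG_FUNCTION_ARGS)
{
    int         pid = PG_GETARG_INT32(0);
    PGPROC     *proc;

    /* Reject non-superusers before touching ProcArrayLock at all. */
    if (!superuser())
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                 errmsg("must be a superuser to log memory contexts")));

    /* Only an authorized caller gets to learn whether the PID is ours. */
    proc = BackendPidGetProc(pid);
    if (proc == NULL)
    {
        ereport(WARNING,
                (errmsg("PID %d is not a PostgreSQL server process", pid)));
        PG_RETURN_BOOL(false);
    }

    /* Ask the target backend to dump its memory contexts to the log. */
    SendProcSignal(pid, PROCSIG_LOG_MEMORY_CONTEXT, proc->backendId);
    PG_RETURN_BOOL(true);
}
```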
Add a note about asynchronous execution by postgres_fdw when applied to Append nodes that contain synchronous subplan(s) as well. Follow-up for commit 27e1f14. Andrey Lepikhov and Etsuro Fujita Discussion: https://postgr.es/m/58fa2aa5-07f5-80b5-59a1-fec8a349fee7%40postgrespro.ru
Fix handling of NULL host name (possibly by using hostaddr). It previously crashed. Also, we should look at connhost, not pghost, to handle multi-host specifications. Also remove an unnecessary SSL_CTX_free(). Reported-by: Jacob Champion <pchampion@vmware.com> Reviewed-by: Michael Paquier <michael@paquier.xyz> Discussion: https://www.postgresql.org/message-id/504c276ab6eee000bb23d571ea9b0ced4250774e.camel@vmware.com
gram.y should discard NULL pointers (empty statements) when assembling a routine_body_stmt_list, as it does for other sorts of statement lists. Julien Rouhaud and Tom Lane, per report from Noah Misch. Discussion: https://postgr.es/m/20210606044418.GA297923@rfd.leadboat.com
Commit 8e03eb9 reverted a bit too much code, reintroducing one of the issues fixed by 39b66a9 - a page might have been left partially empty after relcache invalidation. Reported-By: Tom Lane Author: Masahiko Sawada Discussion: https://postgr.es/m/822752.1623032114@sss.pgh.pa.us Discussion: https://postgr.es/m/CAD21AoA%3D%3Df2VSw3c-Cp_y%3DWLKHMKc1D6s7g3YWsCOvgaYPpJcg%40mail.gmail.com
The FE/BE protocol identifies parameters with an Int16 index, which limits the maximum number of parameters per query to 65535. With batching added to postgres_fdw this limit is much easier to hit, as the whole batch is essentially a single query. The failures are a bit unpredictable, because they also depend on the number of columns in the query. So instead of just failing, this patch tweaks the batch_size to not exceed the maximum number of parameters. Reported-by: Hou Zhijie <houzj.fnst@cn.fujitsu.com> Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> Discussion: https://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com
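A rough sketch of the clamping rule (the helper name here is hypothetical; the real adjustment lives in postgres_fdw's batching code and may differ in detail):

```c
/* FE/BE protocol limit: bind parameters are addressed by an Int16 index. */
#define MAX_PROTOCOL_PARAMS 65535

/*
 * Reduce the requested batch size so that the total number of bind
 * parameters (rows times columns) stays within the protocol limit.
 */
static int
clamp_batch_size(int requested_batch_size, int params_per_row)
{
    if (params_per_row > 0 &&
        requested_batch_size > MAX_PROTOCOL_PARAMS / params_per_row)
        return MAX_PROTOCOL_PARAMS / params_per_row;

    return requested_batch_size;
}
```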
Protocol v2 was last used in PG 7.3, not 7.2. Reported-by: Tatsuo Ishii Discussion: https://postgr.es/m/20210608.091329.906837606658882674.t-ishii@sraoss.co.jp
PersistHoldablePortal has long assumed that it should store the entire output of the query-to-be-persisted, which requires rewinding and re-reading the output. This is problematic if the query is not stable: we might get different row contents, or even a different number of rows, which'd confuse the cursor state mightily. In the case where the cursor is NO SCROLL, this is very easy to solve: just store the remaining query output, without any rewinding, and tweak the portal's cursor state to match. Aside from removing the semantic problem, this could be significantly more efficient than storing the whole output. If the cursor is scrollable, there's not much we can do, but it was already the case that scrolling a volatile query's result was pretty unsafe. We can just document more clearly that getting correct results from that is not guaranteed. There are already prohibitions in place on using SCROLL with FOR UPDATE/SHARE, which is one way for a SELECT query to have non-stable results. We could imagine prohibiting SCROLL when the query contains volatile functions, but that would be expensive to enforce. Moreover, it could break applications that work just fine, if they have functions that are in fact stable but the user neglected to mark them so. So settle for documenting the hazard. While this problem has existed in some guise for a long time, it got a lot worse in v11, which introduced the possibility of persisting plpgsql cursors (perhaps implicit ones) even when they violate the rules for what can be marked WITH HOLD. Hence, I've chosen to back-patch to v11 but not further. Per bug #17050 from Алексей Булгаков. Discussion: https://postgr.es/m/17050-f77aa827dc85247c@postgresql.org
Further thought about bug #17050 suggests that it's a good idea to use CURSOR_OPT_NO_SCROLL for the implicit cursor opened by a plpgsql FOR-over-query loop. This ensures that, if somebody commits inside the loop, PersistHoldablePortal won't try to rewind and re-read the cursor. While we'd have selected NO_SCROLL anyway if FOR UPDATE/SHARE appears in the query, there are other hazards with volatile functions; and in any case, it's silly to expend effort storing rows that we know for certain won't be needed. (While here, improve the comment in exec_run_select, which was a bit confused about the rationale for when we can use parallel mode. Cursor operations aren't a hazard for nameless portals.) This wasn't an issue until v11, which introduced the possibility of persisting such cursors. Hence, back-patch to v11. Per bug #17050 from Алексей Булгаков. Discussion: https://postgr.es/m/17050-f77aa827dc85247c@postgresql.org
The set of subcommands supported by \dAp, \do and \dy was described incorrectly in psql's --help. The documentation was already consistent with the code. Reported-by: inoas, from IRC Author: Matthijs van der Vleuten Reviewed-by: Neil Chen Discussion: https://postgr.es/m/6a984e24-2171-4039-9050-92d55e7b23fe@www.fastmail.com Backpatch-through: 9.6
This only happens if (1) the new standby has no WAL available locally, (2) the new standby is starting from the old timeline, (3) the promotion happened in the WAL segment from which the new standby is starting, (4) the timeline history file for the new timeline is available from the archive but the WAL files for it are not (i.e. this is a race), (5) the WAL files for the new timeline are available via streaming, and (6) recovery_target_timeline='latest'. Commit ee99427 introduced this logic and was an improvement over the previous code, but it mishandled this case. If recovery_target_timeline='latest' and restore_command is set, validateRecoveryParameters() can change recoveryTargetTLI to be different from receiveTLI. If streaming is then tried afterward, expectedTLEs gets initialized with the history of the wrong timeline. It's supposed to be a list of entries explaining how to get to the target timeline, but in this case it ends up with a list of entries explaining how to get to the new standby's original timeline, which isn't right. Dilip Kumar and Robert Haas, reviewed by Kyotaro Horiguchi. Discussion: http://postgr.es/m/CAFiTN-sE-jr=LB8jQuxeqikd-Ux+jHiXyh4YDiZMPedgQKup0g@mail.gmail.com
Per buildfarm member conchuela and Kyotaro Horiguchi, it's possible for the WAL segment that the cascading standby needs to be removed too quickly. Hopefully this will prevent that. Kyotaro Horiguchi Discussion: http://postgr.es/m/20210610.101240.1270925505780628275.horikyota.ntt@gmail.com
One of these functions is new in PostgreSQL 14; might as well start it out right.
Buildfarm member hamerkop has been reporting that two cases in connect/test5.pgc show different error messages than the test expects, because since commit ffa2e46 libpq's connection failure messages are exposing the fact that a GSS-encrypted connection was attempted and failed. That's pretty interesting information in itself, and I certainly don't wish to shoot the messenger, but we need to do something to stabilize the ECPG results. For the second of these two failure cases, we can add the gssencmode=disable option to prevent the discrepancy. However, that solution is problematic for the first failure, because the only unique thing about that case is that it's testing a completely-omitted connection target; there's no place to add the option without defeating the point of the test case. After some thrashing around with alternative fixes that turned out to have undesirable side-effects, the most workable answer is just to give up and remove that test case. Perhaps we can revert this later, if we figure out why the GSS code is misbehaving in hamerkop's environment. Thanks to Michael Paquier for exploration of alternatives. Discussion: https://postgr.es/m/YLRZH6CWs9N6Pusy@paquier.xyz
Previously, we left the EPQ sub-executor alone until ExecEndLockRows. This caused any buffer pins or other resources that it might hold to remain held until ExecutorEnd, which in some code paths means that they are held till the Portal is closed. That can cause user-visible problems, such as blocking VACUUM; and it's unlike the behavior of ordinary table-scanning nodes, which will have released all buffer pins by the time they return an EOF indication. We can make LockRows work more like other plan nodes by calling EvalPlanQualEnd just before returning NULL. We still need to call it in ExecEndLockRows in case the node was not run to completion, but in the normal case the second call does nothing and costs little. Per report from Yura Sokolov. In principle this is a longstanding bug, but in view of the lack of other complaints and the low severity of the consequences, I chose not to back-patch. Discussion: https://postgr.es/m/4aa370cb91ecf2f9885d98b80ad1109c@postgrespro.ru
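A condensed sketch of the changed EOF handling, written as a trimmed-down stand-in for the relevant part of ExecLockRows() (the helper name is hypothetical; the committed hunk differs in shape):

```c
#include "postgres.h"
#include "executor/executor.h"
#include "nodes/execnodes.h"

static TupleTableSlot *
lockrows_fetch_next(LockRowsState *node)
{
    TupleTableSlot *slot = ExecProcNode(outerPlanState(node));

    if (TupIsNull(slot))
    {
        /*
         * Release EPQ resources (such as buffer pins) at EOF, like ordinary
         * scan nodes do, instead of waiting for ExecEndLockRows.  The call
         * in ExecEndLockRows still covers nodes not run to completion.
         */
        EvalPlanQualEnd(&node->lr_epqstate);
        return NULL;
    }

    /* ... the row-locking work would continue here in the real node ... */
    return slot;
}
```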
It turns out that worker.c's code path for TRUNCATE was also careless about establishing a snapshot while executing user-defined code, allowing the checks added by commit 84f5c29 to fail when a trigger is fired in that context. We could just wrap Push/PopActiveSnapshot around the truncate call, but it seems better to establish a policy of holding a snapshot throughout execution of a replication step. To help with that and possible future requirements, replace the previous ensure_transaction calls with pairs of begin/end_replication_step calls. Per report from Mark Dilger. Back-patch to v11, like the previous changes. Discussion: https://postgr.es/m/B4A3AF82-79ED-4F4C-A4E5-CD2622098972@enterprisedb.com
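A sketch of the shape such helpers could take, assuming worker.c's usual transaction and snapshot APIs (the committed functions may differ):

```c
#include "postgres.h"
#include "access/xact.h"
#include "utils/snapmgr.h"

/* Start a transaction if needed and make a snapshot active for the step. */
static void
begin_replication_step(void)
{
    SetCurrentStatementStartTimestamp();

    if (!IsTransactionState())
        StartTransactionCommand();

    PushActiveSnapshot(GetTransactionSnapshot());
}

/* Pop the snapshot and make the step's changes visible to the next step. */
static void
end_replication_step(void)
{
    PopActiveSnapshot();
    CommandCounterIncrement();
}
```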
Commit 2453ea1 redefined pg_proc.proargtypes to include the types of OUT parameters, for procedures only. While that had some advantages for implementing the SQL-spec behavior of DROP PROCEDURE, it was pretty disastrous from a number of other perspectives. Notably, since the primary key of pg_proc is name + proargtypes, this made it possible to have multiple procedures with identical names + input arguments and differing output argument types. That would make it impossible to call any one of the procedures by writing just NULL (or "?", or any other data-type-free notation) for the output argument(s). The change also seems likely to cause grave confusion for client applications that examine pg_proc and expect the traditional definition of proargtypes. Hence, revert the definition of proargtypes to what it was, and undo a number of complications that had been added to support that. To support the SQL-spec behavior of DROP PROCEDURE, when there are no argmode markers in the command's parameter list, we perform the lookup both ways (that is, matching against both proargtypes and proallargtypes), succeeding if we get just one unique match. In principle this could result in ambiguous-function failures that would not happen when using only one of the two rules. However, overloading of procedure names is thought to be a pretty rare usage, so this shouldn't cause many problems in practice. Postgres-specific code such as pg_dump can defend against any possibility of such failures by being careful to specify argmodes for all procedure arguments. This also fixes a few other bugs in the area of CALL statements with named parameters, and improves the documentation a little. catversion bump forced because the representation of procedures with OUT arguments changes. Discussion: https://postgr.es/m/3742981.1621533210@sss.pgh.pa.us
We've accumulated quite a mix of instances of "an SQL" and "a SQL" in the documents. It would be good to be a bit more consistent with these. The most recent version of the SQL standard I looked at seems to prefer "an SQL". That seems like a good lead to follow, so here we change all instances of "a SQL" to become "an SQL". Most instances correctly use "an SQL" already, so it also makes sense to use the dominant variation in order to minimise churn. Additionally, there were some other abbreviations that needed to be adjusted: FSM, SSPI, SRF and a few others. Also fix some pronounceable abbreviations to use "a" instead of "an". For example, "a SASL" instead of "an SASL". Here I've only adjusted the documents and error messages. Many others still exist in source code comments. Translator hint comments seem to be the biggest culprit. It currently does not seem worth the churn to change these. Discussion: https://postgr.es/m/CAApHDvpML27UqFXnrYO1MJddsKVMQoiZisPvsAGhKE_tsKXquw%40mail.gmail.com
We have a dozen PQset*() functions. PQresultSetInstanceData() and this were the libpq setter functions having a different word order. Adopt the majority word order. Reviewed by Alvaro Herrera and Robert Haas, though this choice of name was not unanimous. Discussion: https://postgr.es/m/20210605060555.GA216695@rfd.leadboat.com
Resolve the disagreement with nodes/*funcs.c field order in favor of the latter, which is better-aligned with the IndexStmt field order. This field is new in v14. Discussion: https://postgr.es/m/20210611045546.GA573364@rfd.leadboat.com
The list of options provided by the tab completion was outdated for the following commands: - ALTER SUBSCRIPTION - CREATE SUBSCRIPTION - ALTER PUBLICATION - CREATE PUBLICATION Author: Vignesh C Reviewed-by: Bharath Rupireddy Discussion: https://postgr.es/m/CALDaNm18oHDFu6SFCHE=ZbiO153Fx7E-L1MG0YyScbaDV--U+A@mail.gmail.com
The code added to mark replication slots invalid in commit c655077 had the race condition that a slot can be dropped or advanced concurrently with checkpointer trying to invalidate it. Rewrite the code to close those races. The changes to ReplicationSlotAcquire's API added with c655077 are not necessary anymore. To avoid an ABI break in released branches, this commit leaves that unchanged; it'll be changed in a master-only commit separately. Backpatch to 13, where this code first appeared. Reported-by: Andres Freund <andres@anarazel.de> Author: Andres Freund <andres@anarazel.de> Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/20210408001037.wfmk6jud36auhfqm@alap3.anarazel.de
Commit b663a41 introduced bulk inserts for FDW, but the handling of tuple slots turned out to be problematic for two reasons. Firstly, the slots were re-created for each individual batch. Secondly, all slots referenced the same tuple descriptor - with reasonably small batches this is not an issue, but with large batches this triggers O(N^2) behavior in the resource owner code. These two issues work against each other - to reduce the number of times a slot has to be created/dropped, larger batches are needed. However, the larger the batch, the more expensive the resource owner gets. For practical batch sizes (100 - 1000) this would not be a big problem, as the benefits (latency savings) greatly exceed the resource owner costs. But for extremely large batches it might be much worse, possibly even losing to non-batching mode. Fixed by initializing tuple slots only once (and reusing them across batches) and by using a new tuple descriptor copy for each slot. Discussion: https://postgr.es/m/ebbbcc7d-4286-8c28-0272-61b4753af761%40enterprisedb.com
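A sketch of the "create once, reuse across batches" idea; the field names follow PG 14's ResultRelInfo, but this helper itself is hypothetical, not the committed code:

```c
#include "postgres.h"
#include "access/tupdesc.h"
#include "executor/tuptable.h"
#include "nodes/execnodes.h"
#include "utils/rel.h"

/*
 * Create the batch slots once and keep reusing them; give each slot its
 * own copy of the tuple descriptor so the resource owner does not end up
 * tracking a huge number of references to a single descriptor.
 */
static TupleTableSlot **
get_batch_insert_slots(ResultRelInfo *rri, int batch_size)
{
    if (rri->ri_Slots == NULL)
    {
        rri->ri_Slots = palloc(sizeof(TupleTableSlot *) * batch_size);

        for (int i = 0; i < batch_size; i++)
        {
            TupleDesc   tdesc =
                CreateTupleDescCopy(RelationGetDescr(rri->ri_RelationDesc));

            rri->ri_Slots[i] = MakeSingleTupleTableSlot(tdesc, &TTSOpsVirtual);
        }
        rri->ri_NumSlots = batch_size;
    }

    return rri->ri_Slots;
}
```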
Per 96540f8; the awkward API introduced by c655077 is no longer needed. Author: Andres Freund <andres@anarazel.de> Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/20210408020913.zzprrlvqyvlt5cyy@alap3.anarazel.de
Commit acb7e4e added a new implementation for PQsendQuery so that it works in pipeline mode (by using the extended query protocol), but it behaves differently from the 'Q' message (in simple query protocol) used by the regular implementation: the new one doesn't close the unnamed portal. Change the new code to have identical behavior to the old. Reported-by: Yura Sokolov <y.sokolov@postgrespro.ru> Author: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/202106072107.d4i55hdscxqj@alvherre.pgsql
apply_handle_tuple_routing(), having detected and reported that the tuple it needed to update didn't exist, tried to update that tuple anyway, leading to a null-pointer dereference. logicalrep_partition_open() failed to ensure that the LogicalRepPartMapEntry it built for a partition was fully independent of that for the partition root, leading to trouble if the root entry was later freed or rebuilt. Meanwhile, on the publisher's side, pgoutput_change() sometimes attempted to apply execute_attr_map_tuple() to a NULL tuple. The first of these was reported by Sergey Bernikov in bug #17055; I found the other two while developing some test cases for this sadly under-tested code. Diagnosis and patch for the first issue by Amit Langote; patches for the others by me; new test cases by me. Back-patch to v13 where this logic came in. Discussion: https://postgr.es/m/17055-9ba800ec8522668b@postgresql.org
We were already reporting it, but only after the parallel workers were finished, which is visibly much later than what happens in a serial build. With this change we report it when the leader starts its own sort phase when participating in the build (the normal case). Now this might happen a little later than when the workers start their sorting phases, but a) communicating the actual phase start from workers is likely to be a hassle, and b) the sort phase start is pretty fuzzy anyway, since sorting per se is actually initiated by tuplesort.c internally earlier than tuplesort_performsort() is called. Backpatch to pg12, where the progress reporting code for CREATE INDEX went in. Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com> Author: Matthias van de Meent <boekewurm+postgres@gmail.com> Reviewed-by: Greg Nancarrow <gregn4422@gmail.com> Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org> Discussion: https://postgr.es/m/1128176d-1eee-55d4-37ca-e63644422adb
This reverts commit 54fb8c7, as per the issues reported by fairywren when it comes to MinGW because of the lack of microsoft_native_stat() there. Using just stat() for MSVC is not sufficient to take care of the concurrency problems with files pending on deletion. It may be possible to add some __MINGW64__ conditionals in the code to switch to a different implementation of stat() in this build context, but I am not sure whether relying on MinGW's implementation of stat() is enough to take care of the problems we are trying to fix. So this needs more study. Discussion: https://postgr.es/m/YOvOlfRrIO0yGtgw@paquier.xyz Backpatch-through: 14
This should have been removed in commit 7e30c18, which split the loop into two. Only the first loop uses the 'from' variable; updating it in the second loop is bogus. It was never read after the first loop, so this was harmless and surely optimized away by the compiler, but let's be tidy. Backpatch to all supported versions. Author: Ranier Vilela Discussion: https://www.postgresql.org/message-id/CAEudQAoWq%2BAL3BnELHu7gms2GN07k-np6yLbukGaxJ1vY-zeiQ%40mail.gmail.com
The idea behind this patch is to design out bugs like the one fixed by commit 9d52311. Previously, once one did RelationOpenSmgr(rel), it was considered okay to access rel->rd_smgr directly for some not-very-clear interval. But since that pointer will be cleared by relcache flushes, we had bugs arising from overreliance on a previous RelationOpenSmgr call still being effective. Now, very little code except that in rel.h and relcache.c should ever touch the rd_smgr field directly. The normal coding rule is to use RelationGetSmgr(rel) and not expect the result to be valid for longer than one smgr function call. There are a couple of places where using the function every single time seemed like overkill, but they are now annotated with large warning comments. Amul Sul, after an idea of mine. Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
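The coding rule boils down to a tiny accessor; here is a sketch of its likely shape in rel.h, assuming smgropen()/smgrsetowner(), and not necessarily the exact committed text:

```c
/*
 * Get the relation's SMgrRelation, opening it if needed.  Callers must not
 * assume the result stays valid across relcache flushes, so they should
 * not hold onto it for longer than one smgr function call.
 */
static inline SMgrRelation
RelationGetSmgr(Relation rel)
{
    if (unlikely(rel->rd_smgr == NULL))
        smgrsetowner(&(rel->rd_smgr),
                     smgropen(rel->rd_node, rel->rd_backend));
    return rel->rd_smgr;
}
```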
Apple's mechanism for dealing with functions that are available in only some OS versions confuses AC_CHECK_FUNCS, and therefore AC_REPLACE_FUNCS. We can use AC_CHECK_DECLS instead, so long as we enable -Werror=unguarded-availability-new. This allows people compiling for macOS to control whether or not preadv/pwritev are used by setting MACOSX_DEPLOYMENT_TARGET, rather than supplying a back-rev SDK. (Of course, the latter still works, too.) James Hilliard Discussion: https://postgr.es/m/20210122193230.25295-1-james.hilliard1@gmail.com
Allow a pager to be used by the \watch command. This works but isn't very useful with traditional pagers like "less", so use a different environment variable. The popular open source tool "pspg" (also by Pavel) knows how to display the output if you set PSQL_WATCH_PAGER="pspg --stream". To make \watch react quickly when the user quits the pager or presses ^C, and also to increase the accuracy of its timing and decrease the rate of useless context switches, change the main loop of the \watch command to use sigwait() rather than a sleeping/polling loop, on Unix. Supported on Unix only for now (like pspg). Author: Pavel Stehule <pavel.stehule@gmail.com> Author: Thomas Munro <thomas.munro@gmail.com> Discussion: https://postgr.es/m/CAFj8pRBfzUUPz-3gN5oAzto9SDuRSq-TQPfXU_P6h0L7hO%2BEhg%40mail.gmail.com
This fixes a theoretical bug in tuplesort.c: if a bounded sort was used in combination with a byval Datum sort (tuplesort_begin_datum), then switching the sort to a bounded heap in make_bounded_heap() would call free_sort_tuple(). The problem was that when sorting Datums of a byval type, the tuple is NULL, and free_sort_tuple() would free the memory for it regardless. This would result in a crash. Here we fix that simply by adding a check to see if the tuple is NULL before trying to disassociate and free any memory belonging to it. The reason this bug is only theoretical is that nowhere in the current code base do we call tuplesort_set_bound() when performing a Datum sort. However, let's backpatch a fix for this, as any extension using the code in this way is likely to run into problems. Author: Ronan Dunklau Discussion: https://postgr.es/m/CAApHDvpdoqNC5FjDb3KUTSMs5dg6f+XxH4Bg_dVcLi8UYAG3EQ@mail.gmail.com Backpatch-through: 9.6, oldest supported version
4146925 went to the trouble of removing a theoretical bug from free_sort_tuple by checking if the tuple was NULL before freeing it. Let's make this a little more robust by also setting the tuple to NULL so that should we be called again we won't end up doing a pfree on the already pfree'd tuple. Per advice from Tom Lane. Discussion: https://postgr.es/m/3188192.1626136953@sss.pgh.pa.us Backpatch-through: 9.6, same as 4146925
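Taken together, the two fixes leave the helper looking roughly like this; the sketch assumes tuplesort.c's internal FREEMEM macro and SortTuple struct and is not the literal committed code:

```c
static void
free_sort_tuple(Tuplesortstate *state, SortTuple *stup)
{
    /* Datum sorts of by-value types carry no separately palloc'd tuple. */
    if (stup->tuple)
    {
        FREEMEM(state, GetMemoryChunkSpace(stup->tuple));
        pfree(stup->tuple);
        /* Clear the pointer so a repeat call cannot double-free it. */
        stup->tuple = NULL;
    }
}
```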
There's no point in checking whether an INT8 sequence's seqmin and seqmax values are outside the range of the minimum and maximum values for an int64 type. These both use the same underlying type, so an INT8 sequence certainly cannot be outside the minimum and maximum values supported by int64. This code is fairly harmless and it seems likely that most compilers would optimize it out anyway; nevertheless, let's remove it, replacing it with a small comment to mention why the check is not needed. Author: Greg Nancarrow, with the comment revised by David Rowley Discussion: https://postgr.es/m/CAJcOf-c9KBUZ8ow_6e%3DWSfbbEyTKfqV%3DVwoFuODQVYMySHtusw%40mail.gmail.com
The name introduced by commit 4656e3d was agreed to be unreasonably long. To match this change, rename initdb's recently-added --clobber-cache option to --discard-caches. Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
"Result Cache" was never a great name for this node, but nobody managed to come up with another name that anyone liked enough. That was until David Johnston mentioned "Node Memoization", which Tom Lane revised to just "Memoize". People seem to like "Memoize", so let's do the rename. Reviewed-by: Justin Pryzby Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us Backpatch-through: 14, where Result Cache was introduced
The internals of the frontend-side callbacks for SASL are visible in libpq-int.h, but the header was not getting installed. This would cause compilation failures for applications playing with the internals of libpq. Issue introduced in 9fd8557. Author: Mikhail Kulagin Reviewed-by: Jacob Champion Discussion: https://postgr.es/m/05ce01d777cb$40f31d60$c2d95820$@postgrespro.ru
To add support for streaming transactions at prepare time into the built-in logical replication, we need to do the following things: * Modify the output plugin (pgoutput) to implement the new two-phase API callbacks, by leveraging the extended replication protocol. * Modify the replication apply worker, to properly handle two-phase transactions by replaying them on prepare. * Add a new SUBSCRIPTION option "two_phase" to allow users to enable two-phase transactions. We enable two_phase once the initial data sync is over. We must, however, explicitly disable replication of two-phase transactions during replication slot creation, even if the plugin supports it. We don't need to replicate the changes accumulated during this phase, and moreover, we don't have a replication connection open so we don't know where to send the data anyway. The streaming option is not allowed with this new two_phase option. This can be done as a separate patch. We don't allow toggling the two_phase option of a subscription because it can lead to an inconsistent replica. For the same reason, we don't allow refreshing the publication once two_phase is enabled for a subscription, unless the copy_data option is false. Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow Tested-By: Haiying Tang Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
Reported-By: Peter Eisentraut Backpatch-through: 14 Discussion: https://postgr.es/m/8f5e63b8-e8ed-0f80-d8c4-68222624c200@enterprisedb.com
Previously, we would send each line as a separate CopyData message. That's pretty wasteful if the table is narrow, as each CopyData message has 5 bytes of overhead. For efficiency, buffer up and pack 8 kB of input data into each CopyData message. The server also sends each line as a separate CopyData message in COPY TO STDOUT, and that's similarly wasteful. But that's documented in the FE/BE protocol description, so changing that would be a wire protocol break. Reviewed-by: Aleksander Alekseev Discussion: https://www.postgresql.org/message-id/40b2cec0-d0fb-3191-2ae1-9a3fe16a7e48%40iki.fi
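A minimal sketch of the client-side buffering idea using libpq; the helper and struct names are hypothetical and this is not psql's actual code:

```c
#include <string.h>
#include "libpq-fe.h"

#define COPY_BUFFER_SIZE 8192   /* pack roughly 8 kB per CopyData message */

typedef struct CopyOutBuffer
{
    char    data[COPY_BUFFER_SIZE];
    int     len;
} CopyOutBuffer;

/* Append one input line, flushing a full buffer as one CopyData message. */
static int
copy_buffered_put(PGconn *conn, CopyOutBuffer *buf,
                  const char *line, int linelen)
{
    if (buf->len > 0 && buf->len + linelen > COPY_BUFFER_SIZE)
    {
        if (PQputCopyData(conn, buf->data, buf->len) <= 0)
            return -1;
        buf->len = 0;
    }

    if (linelen > COPY_BUFFER_SIZE)     /* oversized line: send it directly */
        return PQputCopyData(conn, line, linelen);

    memcpy(buf->data + buf->len, line, linelen);
    buf->len += linelen;
    return 1;
}
```

A caller would flush any remaining buffered bytes with one last PQputCopyData() before calling PQputCopyEnd().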
Commit 0563a3a changed how partition constraints were generated such that this function no longer computes the mapping of parent attnos to child attnos. This is an external function that extensions could use, so this is potentially a breaking change. No external callers are known, however, and this will make it simpler to write such callers in the future. Author: Hou Zhijie Reviewed-by: David Rowley, Michael Paquier, Soumyadeep Chakraborty Discussion: https://www.postgresql.org/message-id/flat/OS0PR01MB5716A75A45BE46101A1B489894379@OS0PR01MB5716.jpnprd01.prod.outlook.com
This allows Param substitution to produce just the same result as writing a constant value literally would have done. While it hardly matters so far as the current core code is concerned, extensions might take more interest in node location fields. Julien Rouhaud Discussion: https://postgr.es/m/20170311220932.GJ15188@nol.local
Build farm animals running ancient HPUX and Solaris have a non-standard sigwait() from draft versions of POSIX, so they didn't like commit 7c09d27. To avoid the problem in general, only try to use sigwait() if it's declared by <signal.h> and matches the expected declaration. To select the modern declaration on Solaris (even in non-threaded programs), move -D_POSIX_PTHREAD_SEMANTICS into the right place to affect all translation units. Also fix the error checking. Modern sigwait() doesn't set errno. Thanks to Tom Lane for help with this. Discussion: https://postgr.es/m/3187588.1626136248%40sss.pgh.pa.us
A code path asserted that the archiver was dead, but a check made that impossible. Author: Bharath Rupireddy Discussion: https://postgr.es/m/CALj2ACW=CYE1ars+2XyPTEPq0wQvru4c0dPZ=Nrn3EqNBkksvQ@mail.gmail.com Backpatch-through: 14
There is a non-trivial amount of code that handles ZLIB compression in pg_receivewal, from basics like the format name, the calculation of the start streaming position and of course the compression itself, but there was no automated coverage for it. This commit introduces a set of conditional tests (if the build supports ZLIB) to cover the creation of ZLIB-compressed WAL segments, the handling of partial, compressed WAL segments and the compression operation itself. Note that there is an extra phase checking the validity of the generated files by directly using a gzip command, passed down by the Makefile of pg_receivewal. This part is skipped if the command cannot be found, something likely to happen on Windows with MSVC unless one sets the GZIP_PROGRAM variable in the environment of the test. This set of tests will become handy for upcoming patches that add more options for the compression methods used by pg_receivewal, like LZ4, to make sure that no existing facilities are broken. Author: Georgios Kokolatos Reviewed-by: Gilles Darold, Michael Paquier Discussion: https://postgr.es/m/07BK3Mk5aEOsTwGaY77qBVyf9GjoEzn8TMgHLyPGfEFPIpTEmoQuP2P4c7teesjSg-LPeUafsp1flnPeQYINMSMB_UpggJDoduB5EDYBqaQ=@protonmail.com
When reporting "conflicting or redundant options" errors, try to ensure that errposition() is used, to help the user identify the offending option. Formerly, errposition() was invoked in less than 60% of cases. This patch raises that to over 90%, but there remain a few places where the ParseState is not readily available. Using errdetail() might improve the error in such cases, but that is left as a task for the future. Additionally, since this error is thrown from over 100 places in the codebase, introduce a dedicated function to throw it, reducing code duplication. Extracted from a slightly larger patch by Vignesh C. Reviewed by Bharath Rupireddy, Alvaro Herrera, Dilip Kumar, Hou Zhijie, Peter Smith, Daniel Gustafsson, Julien Rouhaud and me. Discussion: https://postgr.es/m/CALDaNm33FFSS5tVyvmkoK2cCMuDVxcui=gFrjti9ROfynqSAGA@mail.gmail.com
This commit fixes the description of a couple of multirange operators and the oprjoin for another multirange operator. The change of oprjoin is more cosmetic, since both the old and new functions return the same constant. These cosmetic changes aren't worth a catalog incompatibility between 14beta2 and 14beta3, so catversion isn't bumped. Discussion: https://postgr.es/m/CAPpHfdv9OZEuZDqOQoUKpXhq%3Dmc-qa4gKCPmcgG5Vvesu7%3Ds1w%40mail.gmail.com Backpatch-through: 14
The OpenBSD implementation of gzip considers only files suffixed by "Z", "gz", "z", "tgz" or "taz" as valid targets, discarding anything else and making a command using --test exit with an error code of 512 if anything invalid is found. The test introduced in ffc9dda tested a WAL segment suffixed as .gz.partial, enough to make the test fail. Testing only a full segment is fine enough in terms of coverage, so simplify the code by discarding the .gz.partial segment in this check. This should be enough to make the test pass with OpenBSD environments. Per report from curculio. Discussion: https://postgr.es/m/YPAdf9r5aJbDoHoq@paquier.xyz
Autoconf's AC_CHECK_DECLS() always defines HAVE_DECL_whatever as 1 or 0, but some of the entries in msvc/Solution.pm showed such symbols as "undef" instead of 0. Fix that for consistency. There's no live bug in current usages AFAICS, but it's not hard to imagine one creeping in if more-complex #if tests get added. Back-patch to v13, which is as far back as Solution.pm contains this data. The inconsistency still exists in the manually-filled pg_config_ext.h.win32 files of older branches; but as long as the problem is only latent, it doesn't seem worth the trouble to clean things up there. Discussion: https://postgr.es/m/3185430.1626133592@sss.pgh.pa.us
As of v14, pg_depend contains almost 7000 "pin" entries recording the OIDs of built-in objects. This is a fair amount of bloat for every database, and it adds time to pg_depend lookups as well as initdb. We can get rid of all of those entries in favor of an OID range check, i.e. "OIDs below FirstUnpinnedObjectId are pinned". (template1 and the public schema are exceptions. Those exceptions are now wired into IsPinnedObject() instead of initdb's code for filling pg_depend, but it's the same amount of cruft either way.) The contents of pg_shdepend are modified likewise. Discussion: https://postgr.es/m/3737988.1618451008@sss.pgh.pa.us
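A sketch of the resulting range test; the boundary value is shown as a local stand-in and the committed function may handle additional cases:

```c
#include "postgres.h"
#include "catalog/pg_database.h"   /* DatabaseRelationId */
#include "catalog/pg_namespace.h"  /* NamespaceRelationId, PG_PUBLIC_NAMESPACE */

/* Stand-in for the boundary added by this change (normally from a header). */
#define FirstUnpinnedObjectId  12000

/* Is the given built-in object pinned, i.e. undroppable? */
bool
IsPinnedObject(Oid classId, Oid objectId)
{
    /* Objects assigned OIDs at runtime are never pinned. */
    if (objectId >= FirstUnpinnedObjectId)
        return false;

    /* template1 (OID 1) and the public schema are explicit exceptions. */
    if (classId == DatabaseRelationId && objectId == 1)
        return false;
    if (classId == NamespaceRelationId && objectId == PG_PUBLIC_NAMESPACE)
        return false;

    /* Everything else below the boundary is pinned. */
    return true;
}
```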
Ensure to properly mark up function parameters in text with <parameter>, avoid using <acronym> for terms which aren't acronyms and properly place the ", and" in a value list. The acronym removal is a follow-up to commit fb72a7b which removed it for minmax-multi. In passing, also fix an incorrectly cased word. Author: Ekaterina Kiryanova <e.kiryanova@postgrespro.ru> Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at> Discussion: https://postgr.es/m/c050ecbc-80b2-b360-3c1d-9fe6a6a11bb5@postgrespro.ru Backpatch-through: v14
As reported by buildfarm member bowerbird, those tests are unstable on Windows. The failure produced there points to a problem with gzflush(), which fails to sync a freshly-opened file even with the gzFile properly opened. While testing this myself with MSVC, I bumped into a different error where a file could simply not be opened, so this makes me rather doubtful that testing this area on Windows is a good idea if it finishes with random concurrency failures. This requires more investigation, and keeping this buildfarm member red is not a good thing in the long term, so for now this just disables this set of tests on Windows. Discussion: https://postgr.es/m/YPDLz2x3o1aX2wRh@paquier.xyz